A Lagrange Programming Neural Network Approach with an ℓ0-Norm Sparsity Measurement for Sparse Recovery and Its Circuit Realization

Authors

Abstract

Many analog neural network approaches for sparse recovery are based on using the ℓ1-norm as a surrogate of the ℓ0-norm. This paper proposes a model, namely Lagrange programming neural network with ℓp objective and quadratic constraint (LPNN-LPQC), with an ℓ0-norm sparsity measurement for solving the constrained basis pursuit denoise (CBPDN) problem. As the ℓ0-norm is non-differentiable, we first use a differentiable ℓp-norm-like function to approximate it. However, this function does not have an explicit expression and, thus, we utilize the locally competitive algorithm (LCA) concept to handle the nonexistence of the explicit expression. With the LCA approach, the dynamics are defined by an internal state vector. In the proposed model, the thresholding elements are not the conventional ones used in optimization. We also propose a circuit realization for these elements. On the theoretical side, we prove that the equilibrium points of our method satisfy the Karush-Kuhn-Tucker (KKT) conditions of the approximated CBPDN problem and that they are asymptotically stable. We perform large-scale simulations with various algorithms and models. Simulation results show that the proposed method is better than, or comparable to, several state-of-the-art numerical algorithms, and it ...
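To make the LCA idea in the abstract concrete, here is a minimal sketch of Euler-discretized LCA dynamics for sparse recovery. This is an illustration under assumptions: the paper's LPNN-LPQC uses non-conventional ℓp-norm-like thresholding elements, while this sketch substitutes the standard soft-threshold nonlinearity; dimensions, step size, and threshold level are illustrative choices.

```python
import numpy as np

# Sketch of LCA-style state dynamics for sparse recovery (assumption:
# the paper's lp-norm-like thresholding element is replaced by the
# standard soft threshold for illustration).
rng = np.random.default_rng(0)
m, n, k = 32, 64, 3                 # measurements, dimension, sparsity
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)  # unit-norm dictionary columns

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = Phi @ x_true                    # noiseless measurements

lam, dt = 0.02, 0.1                 # threshold level and Euler step
G, b = Phi.T @ Phi, Phi.T @ y
u = np.zeros(n)                     # internal state vector

def soft(v):
    """Soft-thresholding output nonlinearity."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

for _ in range(3000):
    a = soft(u)                                # output of thresholding elements
    u += dt * (b - u - (G - np.eye(n)) @ a)    # LCA state update

a = soft(u)
print(np.linalg.norm(a - x_true) / np.linalg.norm(x_true))  # small relative error
```

At a fixed point, u = b - (G - I)a, which is the stationarity condition of the thresholded objective; the internal state u, rather than the output a, carries the dynamics, as in the LCA framework.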


Similar Articles

Chaos Synchronization: a Lagrange Programming Network Approach

In this paper we interpret chaos synchronization schemes within the framework of Lagrange programming networks, which form a class of continuous-time optimization methods for solving constrained nonlinear optimization problems. From this study it follows that standard synchronization schemes can be regarded as a Lagrange programming network with soft constraining, where synchronization between ...

Full text
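The Lagrange programming network described above is a continuous-time method that descends on the primal variables and ascends on the multipliers of the Lagrangian. A minimal sketch, assuming the standard saddle-flow form and a toy equality-constrained problem (minimize 0.5‖x‖² subject to x₁ + x₂ = 1; the problem and step size are illustrative):

```python
import numpy as np

# Lagrange programming network sketch (assumed saddle-flow form):
#   dx/dt  = -grad_x L(x, lam)   (descent on primal variables)
#   dlam/dt = +grad_lam L(x, lam) (ascent on the multiplier)
# for L(x, lam) = 0.5*||x||^2 + lam * (a @ x - 1).
a_vec = np.array([1.0, 1.0])
x, lam, dt = np.zeros(2), 0.0, 0.05

for _ in range(2000):
    dx = -(x + lam * a_vec)     # -grad_x L
    dlam = a_vec @ x - 1.0      # +grad_lam L
    x += dt * dx
    lam += dt * dlam

print(x)  # converges to the KKT point, approximately [0.5, 0.5]
```

The equilibrium of this flow is exactly the KKT point of the constrained problem, which is why such networks can be analyzed with Lyapunov arguments.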


Designing and Validating a Textbook Evaluation Questionnaire for Reading Comprehension II and Exploring Its Relationship with Achievement

In any educational program, the most important factor affecting students' success is the textbook (McDonough & Shaw, 2003). In fact, the textbook is the heart of English language teaching (Sheldon, 1988). Given the great importance of the textbook as an essential element of language classes, textbooks must be carefully evaluated and selected to prevent any negative effect on students (Litz). By designing a textbook evaluation questionnaire that gives instructors the opportunity for valid evaluation, this study ...

First 15 pages

Sparse signal recovery with unknown signal sparsity

In this paper, we propose a detection-based orthogonal matching pursuit (DOMP) algorithm for compressive sensing. Unlike conventional greedy algorithms, the proposed algorithm does not rely on a priori knowledge of the signal sparsity, which may not be available in some applications, e.g., sparse multipath channel estimation. The DOMP runs a binary hypothesis test on the residual vector of OMP at each ...

Full text
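The key point of the DOMP abstract is stopping OMP without knowing the sparsity level. A hedged sketch of that idea: the actual DOMP runs a binary hypothesis test on the residual, while here a simple residual-norm threshold stands in for the test; the function name and dimensions are illustrative.

```python
import numpy as np

def omp_residual_stop(Phi, y, tol):
    """OMP that stops via a residual criterion instead of a known sparsity
    (a simple norm threshold stands in for DOMP's hypothesis test)."""
    m, n = Phi.shape
    support, r, x_s = [], y.copy(), np.zeros(0)
    while np.linalg.norm(r) > tol and len(support) < m:
        j = int(np.argmax(np.abs(Phi.T @ r)))   # most correlated atom
        if j in support:                        # residual orthogonal to atoms
            break
        support.append(j)
        # Least-squares fit on the current support, then update the residual.
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x, support

rng = np.random.default_rng(1)
Phi = rng.standard_normal((32, 64))
Phi /= np.linalg.norm(Phi, axis=0)
x_true = np.zeros(64)
x_true[[5, 17, 40]] = [1.0, -1.0, 0.7]
y = Phi @ x_true                                # noiseless measurements
x_hat, S = omp_residual_stop(Phi, y, tol=1e-6)
print(sorted(S))
```

In the noisy setting, `tol` would be set from the noise level (or replaced by the statistical test), which is exactly what removes the need to know the sparsity in advance.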

An efficient modified neural network for solving nonlinear programming problems with hybrid constraints

This paper presents optimization techniques for solving convex programming problems with hybrid constraints. Based on the saddle point theorem, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle, a neural network model is constructed. The equilibrium point of the proposed model is proved to be equivalent to the optima...

Full text


Journal

Journal: Mathematics

Year: 2022

ISSN: 2227-7390

DOI: https://doi.org/10.3390/math10244801